35 research outputs found

    Ozone deposition over a boreal lake by the eddy covariance method

    Deposition is the main removal process of ground-level ozone. In some boreal regions, lakes account for up to 30% of the area. However, there have been only a few studies on ozone deposition over lake water in the past forty years, and so far only one study has measured the ozone deposition velocity (vd) over a lake with the eddy covariance technique. The 42-day campaign described in this thesis was carried out in August and September 2012 at Lake Kuivajärvi, near the SMEAR II station in Hyytiälä, Finland. The results showed a mean vd of 0.88 ± 0.05 mm s–1, about one-fifth of that over forest. Over the lake, vd exhibited a weak diurnal cycle with a peak during the nighttime, while the forest showed the opposite pattern. The lake data were classified into daytime and nighttime using a solar elevation angle threshold of –2°, and the two sets differed significantly according to a Mann-Whitney U test. Further analysis showed that the higher vd at night might be attributed to more unstable atmospheric conditions. Although there is no evidence supporting a correlation of vd with the stability of the mixing layer in the lake, the dominance of mechanically induced turbulence appeared to suppress vd. Compared with previous studies, elevated wind did not enhance the rate of ozone deposition as expected.
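The day/night comparison described above can be sketched as follows. Only the –2° solar elevation threshold and the choice of a Mann-Whitney U test come from the abstract; the data below are synthetic illustrations, not the campaign's measurements.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative deposition-velocity samples (mm s^-1); NOT the campaign data.
rng = np.random.default_rng(0)
solar_elev = rng.uniform(-30, 50, 500)   # solar elevation angle (degrees)
vd = rng.normal(0.88, 0.3, 500)          # hypothetical vd values
vd[solar_elev < -2] += 0.2               # make nighttime values slightly higher

# Split by the -2 degree solar-elevation threshold used in the study
night = vd[solar_elev < -2]
day = vd[solar_elev >= -2]

# Two-sided Mann-Whitney U test: do the two distributions differ?
stat, p = mannwhitneyu(day, night, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.3g}")
```

With a genuine shift between the groups, as here, the test reports a small p-value; the nonparametric test avoids assuming normality of vd.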

    Derivation of Black Carbon Proxies in an Integrated Urban Air Quality Monitoring Network

    Air pollution is one of the biggest environmental health challenges in the world, especially in urban regions, where about 90% of the world's population lives. Black carbon (BC) has been demonstrated to play an important role in climate change and air quality and to pose a potential risk to human health. BC has also been suggested to be more closely associated with the health effects of aerosol particles than the commonly monitored particulate matter, which does not solely originate from combustion sources. Furthermore, BC has been recommended for inclusion as one of the parameters in the air quality index (AQI) communicated to citizens. However, due to financial constraints and the lack of national legislation, BC is not yet measured at every air quality monitoring station. As an alternative, some researchers have developed low-cost sensors that give indicative ambient BC concentrations. Even so, due to instrument failure or data corruption, measurements by physical sensors are not always possible, and long data gaps can exist. With missing data, the analysis of interactions between air pollutants becomes more uncertain; therefore, air quality models are needed for data gap imputation and, moreover, for sensor virtualization. To address this deficiency, this thesis aims to derive statistical proxies as virtual sensors to estimate BC using the current air quality monitoring network in the Helsinki metropolitan area (HMA). To achieve this, we first characterized the ambient BC concentrations in four types of environments in the HMA: traffic sites (TR: 0.77–2.08 μg m−3), urban background (UB: 0.51–0.53 μg m−3), detached housing (DH: 0.64–0.80 μg m−3) and regional background (RB: 0.27–0.28 μg m−3). TR sites, in general, had higher BC concentrations due to their close proximity to vehicular emissions, but showed decreasing trends (–10.4 % yr−1), likely thanks to the fast renewal of the city bus fleet in the HMA.
UB, on the other hand, had more diverse sources of BC, including biomass burning and traffic combustion. Its trend had also been decreasing, but at a smaller rate (e.g. UB1: –5.9 % yr−1). We then narrowed the dataset down to a street canyon site and an urban background site for BC proxy derivation. At both sites, despite the low correlation with meteorological factors, BC correlated well with other commonly monitored air pollutant parameters measured by both reference instruments and low-cost sensors, such as NOx and PM2.5. Based on this close association, we developed a statistical proxy with adaptive selection of input variables, named the input-adaptive proxy (IAP). This white-box model performed better in terms of accuracy at the street canyon site (R2 = 0.81–0.87) than at the urban background site (R2 = 0.44–0.60) because the street canyon data contained fewer gaps. When compared with other white- and black-box models, the IAP is preferred because of its flexibility and architectural transparency. We further demonstrated the feasibility of sensor virtualization by using statistical proxies like the IAP at both sites. We also stressed that such virtual sensors are location-specific, but it might be possible to extend the models from one street canyon site to another with a calibration factor. Similarly, the proposed methodology can be applied to estimate other air pollutant parameters with scarce data, such as lung-deposited surface area and ultrafine particles, to complement the existing AQI.
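The input-adaptive idea can be illustrated with a minimal sketch: fit a proxy only on the predictor columns that are currently available, so the virtual sensor keeps working when one input drops out. A plain least-squares fit stands in for the actual IAP model, and the numbers and variable names below are hypothetical.

```python
import numpy as np

# Hypothetical training data: columns are NOx and PM2.5, target is BC
# (all units arbitrary; NOT measurements from the thesis).
X = np.array([[40.0, 12.0], [55.0, 15.0], [30.0, 9.0], [70.0, 20.0], [25.0, 8.0]])
y = np.array([1.2, 1.6, 0.9, 2.1, 0.8])

def fit_proxy(X, y, available):
    """Fit a least-squares BC proxy using only the available input columns."""
    cols = [i for i, ok in enumerate(available) if ok]
    A = np.column_stack([X[:, cols], np.ones(len(X))])  # add an intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return cols, coef

def predict_proxy(x_row, cols, coef):
    """Predict BC from one observation, ignoring the unavailable columns."""
    return float(np.dot(x_row[cols], coef[:-1]) + coef[-1])

# If the PM2.5 sensor is down, the proxy adapts to use NOx alone
cols, coef = fit_proxy(X, y, available=[True, False])
bc_est = predict_proxy(np.array([50.0, np.nan]), cols, coef)
print(f"estimated BC: {bc_est:.2f}")
```

The adaptive selection is what lets a single proxy serve as a virtual sensor across outages of different physical instruments.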

    Sensitivity Analysis for Predicting Sub-Micron Aerosol Concentrations Based on Meteorological Parameters

    Sub-micron aerosols are a vital air pollutant to measure because they pose health risks. These particles are quantified as particle number concentration (PN). However, PN measurements are not always available at air quality measurement stations, leading to data scarcity. To compensate for this, PN modeling needs to be developed. This paper presents a PN modeling framework using sensitivity analysis, tested on a one-year aerosol measurement campaign conducted in Amman, Jordan. The method prepares a set of different combinations of all measured meteorological parameters as candidate descriptors of PN concentration. We use artificial neural networks in the form of a feed-forward neural network (FFNN) and a time-delay neural network (TDNN) as modeling tools, and then attempt to find the best descriptors by using all these combinations as model inputs. The best modeling tools are the FFNN for daily averaged data (R2 = 0.77) and the TDNN for hourly averaged data (R2 = 0.66), where the best combination of meteorological parameters is found to be temperature, relative humidity, pressure, and wind speed. As the models follow the patterns of the diurnal cycles well, the results are considered satisfactory. When PN measurements are not directly available, or when much of the PN concentration data is missing, PN models can be used to estimate PN concentration from the available measured meteorological parameters.
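The combination-based sensitivity analysis can be sketched as follows: enumerate every subset of the meteorological inputs and score a model fitted on each. For brevity, a linear least-squares fit stands in for the paper's FFNN/TDNN, and the synthetic data and parameter effects are purely illustrative.

```python
import numpy as np
from itertools import combinations

# Hypothetical hourly data: temperature, RH, pressure, wind speed -> PN
rng = np.random.default_rng(1)
n = 200
met = {
    "T": rng.normal(25, 5, n),      # temperature (deg C)
    "RH": rng.uniform(20, 80, n),   # relative humidity (%)
    "P": rng.normal(920, 3, n),     # pressure (hPa)
    "WS": rng.uniform(0, 6, n),     # wind speed (m/s)
}
# Synthetic PN target driven mainly by T and WS (illustrative only)
pn = 2e4 - 300 * met["T"] - 1500 * met["WS"] + rng.normal(0, 500, n)

def r2_linear(names):
    """R^2 of a least-squares fit on the chosen parameter combination."""
    A = np.column_stack([met[k] for k in names] + [np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, pn, rcond=None)
    resid = pn - A @ coef
    return 1 - resid.var() / pn.var()

# Score every non-empty combination of inputs, as in the sensitivity analysis
scores = {c: r2_linear(c) for k in range(1, 5) for c in combinations(met, k)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))
```

In practice the subsets would be scored on held-out data, so that adding an uninformative input can actually lower the score rather than only inflate the in-sample fit.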

    Data imputation in in situ-measured particle size distributions by means of neural networks

    In air quality research, often only size-integrated particle mass concentrations are considered as indicators of aerosol particles. However, mass concentrations do not provide sufficient information to convey the full story of the fractionated size distribution, in which particles of different diameters (Dp) deposit differently in the respiratory system and cause various kinds of harm. Aerosol size distribution measurements rely on a variety of techniques to classify the aerosol size and measure the size distribution. From the raw data, the ambient size distribution is determined using a suite of inversion algorithms. However, the inversion problem is quite often ill-posed and challenging to solve. Due to instrumental insufficiency and inversion limitations, imputation methods for the fractionated particle size distribution are of great significance for filling missing gaps or negative values. The study at hand uses a merged particle size distribution from a scanning mobility particle sizer (NanoSMPS) and an optical particle sizer (OPS), covering aerosol size distributions from 0.01 to 0.42 µm (electrical mobility equivalent size) and 0.3 to 10 µm (optical equivalent size), together with meteorological parameters, collected at an urban background site in Amman, Jordan, in the period 1 August 2016–31 July 2017. We develop and evaluate feed-forward neural network (FFNN) approaches to estimate the number concentration at a particular size bin using (1) meteorological parameters, (2) the number concentrations at other size bins and (3) both of the above as input variables. Two layers with 10–15 neurons are found to be the optimal option. Worse performance is observed at the lower edge (0.01 < Dp < 0.02 µm), the mid-range region (0.15 < Dp < 0.5 µm) and the upper edge (6 < Dp < 10 µm). For the edges at both ends, the number of neighbouring size bins is limited, and the detection efficiency of the corresponding instruments is lower compared to the other size bins.
A distinct performance drop over the overlapping mid-range region is due to a deficiency of the merging algorithm. Another plausible reason for the poorer performance for finer particles is that they are removed from the atmosphere more effectively than coarser particles, so the relationships between the input variables and the small particles are more dynamic. An observable overestimation is also found in the early morning for ultrafine particles, followed by a distinct underestimation before midday. In the winter, due to a possible sensor drift and interference artefacts, the estimation performance is not as good as in the other seasons. The FFNN approach based on meteorological parameters using 5 min data (R2 = 0.22–0.58) shows poorer results than data with a longer time resolution (R2 = 0.66–0.77). The FFNN approach using the number concentrations at the other size bins can serve as an alternative way to replace negative numbers in the raw size distribution dataset thanks to its high accuracy and reliability (R2 = 0.97–1). This negative-number filling approach can maintain a symmetric distribution of errors and complement the existing ill-posed built-in algorithm in particle sizer instruments.
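The negative-number filling idea can be sketched in a few lines: fit the target size bin from the other bins on the clean rows, then replace the negative artefacts with the predictions. A linear least-squares fit stands in for the paper's FFNN, and the size-distribution matrix below is synthetic.

```python
import numpy as np

# Hypothetical size-distribution matrix: rows = time, columns = size bins
rng = np.random.default_rng(2)
base = rng.lognormal(mean=5, sigma=0.3, size=(300, 1))
dist = base * np.array([1.0, 0.8, 0.6, 0.4])   # strongly correlated bins
dist += rng.normal(0, 2, dist.shape)           # measurement noise

target_bin = 2
bad = rng.choice(300, size=20, replace=False)
dist[bad, target_bin] = -1.0                   # inversion artefacts: negatives

# Fit the target bin from the other bins on the clean rows (linear stand-in
# for the FFNN), then fill the negative values with the model's predictions.
clean = dist[:, target_bin] >= 0
others = np.delete(dist, target_bin, axis=1)
A = np.column_stack([others[clean], np.ones(clean.sum())])
coef, *_ = np.linalg.lstsq(A, dist[clean, target_bin], rcond=None)
filled = np.column_stack([others[bad], np.ones(len(bad))]) @ coef
dist[bad, target_bin] = filled
print("negatives remaining:", int((dist[:, target_bin] < 0).sum()))
```

Because neighbouring bins are highly correlated, the bin-to-bin model recovers plausible positive values where the inversion produced negatives.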

    COVID-19 Pandemic Development in Jordan: Short-Term and Long-Term Forecasting

    In this study, we proposed three simple approaches to forecast COVID-19 reported cases in a Middle Eastern society (Jordan). The first approach was a short-term forecast (STF) model based on a linear forecast model using the previous days as a learning database. The second approach was a long-term forecast (LTF) model based on a mathematical formula that best described the current pandemic situation in Jordan. The two approaches are complementary: the STF can cope with sudden daily changes in the pandemic, whereas the LTF can be utilized to predict the occurrence and strength of upcoming waves. As such, the third approach was a hybrid forecast (HF) model merging the STF and the LTF models. The HF was shown to be an efficient forecast model with excellent accuracy. It is evident that the decision to enforce the curfew at an early stage, followed by the planned lockdown, was effective in eliminating a serious wave in April 2020. Vaccination has been effective in combating COVID-19 by reducing infection rates. Based on the forecasting results, there is some possibility that Jordan may face a third wave of the pandemic during the summer of 2021.
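The STF component can be sketched as a linear fit over a short learning window of previous days, extrapolated one step ahead. The window length and the case numbers below are illustrative; the paper's exact setup may differ.

```python
import numpy as np

# Hypothetical daily reported cases; NOT actual Jordanian data.
cases = np.array([120, 135, 150, 160, 180, 195, 210, 230, 245, 260], float)

def short_term_forecast(series, window=5, horizon=1):
    """Fit a line to the last `window` days and extrapolate `horizon`
    days ahead (a minimal reading of the STF idea)."""
    t = np.arange(window)
    slope, intercept = np.polyfit(t, series[-window:], 1)
    return slope * (window - 1 + horizon) + intercept

print(round(short_term_forecast(cases), 1))
```

Because the fit uses only the most recent days, the forecast tracks sudden changes in the epidemic curve, which is exactly where a fixed long-term formula struggles.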

    Short-Term and Long-Term COVID-19 Pandemic Forecasting Revisited with the Emergence of OMICRON Variant in Jordan

    Three simple approaches to forecast the COVID-19 epidemic in Jordan were previously proposed by Hussein et al.: a short-term forecast (STF) based on a linear forecast model with a learning database of the cases reported in the previous 5–40 days, a long-term forecast (LTF) based on a mathematical formula that describes the COVID-19 pandemic situation, and a hybrid forecast (HF), which merges the STF and the LTF models. With the emergence of the OMICRON variant, the LTF failed to forecast the pandemic for reasons related to the infection rate and the spread of the OMICRON variant, which is faster than that of the previous variants. However, the STF remained suitable for the sudden changes in the epidemic curves because these simple models learn from the previously reported cases. In this study, we revisited these models by introducing a simple modification to the LTF and the HF models to better forecast the COVID-19 pandemic in light of the OMICRON variant. As another approach, we also tested a time-delay neural network (TDNN) to model the dataset. Interestingly, the modification consisted of reusing the same function previously used in the LTF model after changing some parameters related to shift and time lag. The mathematical function type was still valid, suggesting that it is well suited to such pandemic situations involving the same virus family. The TDNN was data-driven, robust, and successful in capturing the sudden change in +qPCR cases before and after the emergence of the OMICRON variant.
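The defining feature of a time-delay network is its lag-embedded input: each prediction sees a fixed window of past values. The sketch below builds that time-delay input matrix and, to stay dependency-light, fits a linear autoregression on it in place of the neural network; the series is synthetic.

```python
import numpy as np

# Hypothetical daily +qPCR counts with a wave-like pattern (synthetic).
rng = np.random.default_rng(4)
t = np.arange(120)
series = 500 + 400 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 20, 120)

# Time-delay embedding: row j holds the 7 values preceding day j + 7
lags = 7
X = np.column_stack([series[i:i + len(series) - lags] for i in range(lags)])
y = series[lags:]  # next-day target

# Linear stand-in for the TDNN on the same delayed inputs
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"in-sample R2 with {lags} delay inputs: {r2:.2f}")
```

A TDNN replaces the linear map with a neural network over the same delayed inputs, which is what lets it track abrupt regime changes such as the OMICRON surge.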

    Low-cost Air Quality Sensing Process: Validation by Indoor-Outdoor Measurements

    Air pollution is a major societal challenge, with particulate matter (PM2.5) as the main air pollutant causing serious health implications. Due to the health and economic impacts of air pollution, low-cost and portable air quality sensors can be deployed at scale to measure personal exposure to air pollutants. In this paper, we present an air quality sensing process for low-cost sensors that are planned for long-term use. The steps of this process include design and production, laboratory tests, field tests, deployment, and maintenance. As a case study we focus on the field test, where we use two generations of a portable air quality sensor (capable of measuring meteorological variables and PM2.5) to perform an indoor-outdoor measurement. The study found that all of the measurements were mutually consistent through cross-validation. The sensors' accuracy also proved adequate, with readings similar to those of the nearest air quality reference station.
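The cross-validation step described above can be sketched as a pairwise consistency check: correlate each sensor generation against a reference and against each other. The time series and generation names below are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical co-located PM2.5 time series (ug m^-3); purely illustrative.
rng = np.random.default_rng(3)
reference = rng.gamma(shape=4, scale=3, size=500)        # reference station
sensor_g1 = reference * 1.05 + rng.normal(0, 1.0, 500)   # generation-1 unit
sensor_g2 = reference * 0.95 + rng.normal(0, 0.8, 500)   # generation-2 unit

def agreement(a, b):
    """Pearson correlation as a simple consistency check between two series."""
    return float(np.corrcoef(a, b)[0, 1])

for name, s in [("gen1 vs ref", sensor_g1), ("gen2 vs ref", sensor_g2)]:
    print(name, round(agreement(reference, s), 3))
print("gen1 vs gen2", round(agreement(sensor_g1, sensor_g2), 3))
```

High mutual correlations support the claim that the low-cost units are consistent with each other and adequate against the reference; a slope or bias check would typically accompany the correlation in a full validation.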